11 research outputs found

    Design and Logistics for a LAN Management Course

    This paper outlines an implementation of a hands-on Local Area Network (LAN) management course in the undergraduate curriculum of a department of Information Systems. Three major problems are addressed: faculty preparation, resource allocation, and course design/logistics. Designing such a course presents special difficulties because of the hardware preparation involved and the need to assign supervisor rights to a large number of students simultaneously on a single file server. A course outline and a sample practical exercise are given. The course has been successfully implemented and has received a favorable student response.

    Can the US Minimum Data Set Be Used for Predicting Admissions to Acute Care Facilities?

    This paper is intended to give an overview of Knowledge Discovery in Large Datasets (KDD) and data mining applications in healthcare, particularly as related to the Minimum Data Set (MDS), a resident assessment tool used in US long-term care facilities. The US Health Care Financing Administration, which mandates the use of this tool, has accumulated massive warehouses of MDS data. The pressure in healthcare to increase efficiency and effectiveness while improving patient outcomes requires that we find new ways to harness these vast resources. The intent of this preliminary study design paper is to discuss the development of an approach that utilizes the MDS, in conjunction with KDD and classification algorithms, in an attempt to predict admission from a long-term care facility to an acute care facility. The use of acute care services by long-term care residents is a negative outcome, potentially avoidable, and expensive. The value of the MDS warehouse can be realized by using the stored data in ways that can improve patient outcomes and avoid the use of expensive acute care services. This study, when completed, will test whether the MDS warehouse can be used to describe patient outcomes and possibly be of predictive value.
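The kind of classification approach the abstract proposes can be illustrated with a minimal sketch. This is a hypothetical example only: the feature names (`adl_score`, `prior_admissions`), the composite risk score, and the decision-stump learner are our inventions for illustration, not actual MDS items or the authors' method.

```python
# Hypothetical sketch: a decision stump (one-level classifier) fit to
# synthetic resident records. Feature names are invented; they are NOT
# actual MDS assessment items.

def risk_score(record):
    # Naive composite risk score over the hypothetical features.
    return record["adl_score"] + 2 * record["prior_admissions"]

def fit_stump(records, labels):
    """Pick the risk-score threshold that maximizes training accuracy."""
    best_t, best_acc = None, -1.0
    for t in sorted(risk_score(r) for r in records):
        preds = [risk_score(r) >= t for r in records]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Synthetic training data: True = transfer to an acute care facility.
records = [
    {"adl_score": 2, "prior_admissions": 0},
    {"adl_score": 3, "prior_admissions": 1},
    {"adl_score": 8, "prior_admissions": 2},
    {"adl_score": 9, "prior_admissions": 3},
]
labels = [False, False, True, True]

threshold = fit_stump(records, labels)
predict = lambda r: risk_score(r) >= threshold
```

A real study would replace the synthetic records with de-identified MDS assessments and the stump with a stronger classifier, but the pipeline shape (features, fit, predict) is the same.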

    Behavior Analysis of Team Performance: A Case Study of Membership Replacement

    A three-person team performance task (TPT) is described, and evaluative results are presented under conditions of individual fixed ratios required to complete a work component and a team fixed ratio required to complete a work component. After an initial team performed the task over four successive days, a member was replaced with a novitiate, and the newly formed team performed the task over four successive days thereafter. The results showed differences in performance metrics between the individual and team ratio conditions and between the original and the reformed teams. When communications among team members were permitted at the start of the last two sessions of the study, individual contributions by the three members to the team ratio requirement were equivalent during the final session. The results show the sensitivity of the task to individual and team performance requirements and to membership replacement. They also show the impact of tactical decision making on work distributions. The range of outcomes suggests the utility of this type of task to assess the status of a team and to act as a potential countermeasure to team fragmentation.

Behavior Analysis of Team Performance: A Case Study of Membership Replacement
Henry H. Emurian, Kip Canfield, & Joseph V. Brady

Keywords: Team performance, behavioral health

The need to develop tools to assess and support the behavioral health of space-dwelling crews continues to be acknowledged by NASA (Suedfeld, Bootzin, Harvey, Leon, Musson, Oltmanns, & Paulus, 2010). In that regard, the term “behavioral health” encompasses a broad range of affective, social, and skilled individual and crew performances that must be sustained under the obviously stressful circumstances of long-duration spaceflight (Brady, 2007; Emurian & Brady, 2007). The detection of impending performance degradation necessitates the consideration of innovative approaches to monitor and measure both individual and team performances that realistically relate to the operational status of a crew. The introduction of effective countermeasures to such degradation is complementary to detection, and potential solutions to these two challenges will benefit from a technology that can integrate both considerations within a common conceptual framework with respect to task performance.
A three-person team performance task (TPT) was proposed as a tool to diagnose the status of a crew (Emurian, Canfield, Roma, Gasior, Brinson, Hienz, Hursh, & Brady, 2009), and the rationale of its design, from the perspective of behavior analysis, and an evaluation of its effectiveness have been reported (Emurian, Canfield, Roma, Brinson, Gasior, Hienz, Hursh, & Brady, in press). The initial evaluations were based upon having subjects perform the task for fixed time periods (e.g., 12 min), with instructions to maximize performance effectiveness. Although providing important feedback regarding the properties of the task and performance metrics associated with individual and team performances, a more realistic diagnostic scenario would require a crew to complete a given task without regard to temporal constraints. Accordingly, the present extension of the task implements a fixed-ratio requirement on performance accuracy at the level of the individual team member and at the level of the team. The present report is a case study of the evaluation of such an extension under conditions of the replacement of an established team member with a novitiate. The context of this study includes analyses of group membership replacement previously undertaken within a continuously programmed environment (Emurian, Brady, Ray, Meyerhoff, & Mougey, 1984).

The Behavior Analyst Today Volume 11, Number 3 162

Method

Subjects

Four UMBC undergraduate students volunteered to participate in response to an announcement posted on the student listserv. Volunteers were directed to read the information posted on the web (http://nasa1.ifsm.umbc.edu/tpt/). The study was approved by UMBC’s Institutional Review Board, and informed consent was obtained at the time of each daily session. Each participant was paid $30 in cash at the completion of a session.
Table 1 presents demographic details about the four subjects, collected before Session 1 for subjects 1, 2, and 3 and before Session 5 for subject 2*. Two rating scales were administered to assess each subject’s experience with computer games and overall computer experience. Each scale was a 10-point scale, where the anchors were 1 = No Experience (I am a novice.) to 10 = Extensive Experience (I am an expert.). Notable, perhaps, are the comparatively low ratings by S2 for both game and computer experience. Subjects 1, 2, and 3 reported being acquainted prior to the study. Subject 2 was replaced at Session 5 of the study by S2*, and the replacement reported having no prior acquaintanceship with the other two subjects. The subjects were instructed not to discuss the task between sessions, and post-session debriefings always confirmed that practice.

Table 1. Subject demographics and self-rated experience.

S#   Status      Major                  Sex  Age  Game Experience  Computer Experience
1    Junior      Health Administration  F    19   8                8
2    Junior      Social Work            F    19   2                6
3    Junior      Health Administration  M    20   7                7
2*   Sophomore   Biology                F    18   8                9

Team Performance Task (TPT)

The TPT was designed for use by three-person groups, and the prototype has been described in detail elsewhere (Emurian et al., in press). Figure 1 presents a screen shot of the display presented to a subject (in this case, User1, the designation for S1). The display was similar for all three subjects, who operated the task from three separate computer terminals. In the current configuration, communications were not permitted among the three subjects during a session, beyond the task-related actions to be described. The server for the TPT, which is deployed on the Internet, was running on a port behind the UMBC firewall.

In brief, the task requires the subject to capture a Resource block at the top of the display, drag it, and deposit it on the target without striking a barrier. For an accurate deposit, the color of the deposited Resource block must match the color of the target, which changes from trial to trial. Between the Resource blocks and the target are nine rows. Each row contains a barrier, and the barrier changes position in a row every 10 – 20 sec. Each subject controls the visibility of a barrier within three of the nine rows, and the rows assigned to each subject are determined randomly at the start of each task component (to be described) within a session. In Figure 1, four barriers are highlighted and visible to all subjects, and two barriers (Barrier6 and Barrier8) are dim. Barriers 6 and 8 are visible in that dim state only to User1. To make a barrier visible in the highlighted state to the other two subjects, the cursor must be positioned over the barrier, with the left mouse key held down, for .25, 1, or 4 sec, depending on the component of the session. In Figure 1, Barrier = 1000 indicates that the cursor must be positioned over a barrier, with the mouse down, for 1 sec (i.e., 1000 msec) to highlight the barrier and make it visible to all subjects.

For a correct deposit on the target, which involves dragging and depositing an identically colored Resource block without striking a barrier, one point is added to the Target score. The corresponding “scoreboard” of points is also updated. If a barrier is struck, whether highlighted, dim, or invisible, one point is subtracted from the score, which can become negative. Whenever such a barrier “hit” occurs, the associated Resource block being dragged is eliminated from further play, and a new Resource block at the top has to be dragged to the target. Optimal performance, then, requires cooperation among the three team members to highlight their respective dim barriers so that Resource block movements by other team members can avoid hitting barriers, thereby permitting Target counts to be maximized.

Figure 1. A screen shot of the TPT display for User1. This version of the task requires the team to accumulate 60 points to complete the component, irrespective of the relative contributions of the three team members to that ratio requirement. The hold time to reveal a barrier is 1 sec (1000 msec).

At the bottom right of the display is a button labeled “Request.” When a subject clicks that button with the mouse, a text message is presented at the top of the displays of the other two teammates. For example, if User3 clicks that button, the following message appears to the other two subjects: “User3 has requested that you reveal your barriers.” Successive messages appear in a scrollable list, and all messages on a subject’s display are removed whenever that subject initiates movement of a Resource block. Figure 2 presents a screen shot of the display for User2 after User3 has clicked the “Request” button. The message appears in the top left of the display. Figure 2 also shows the scoreboard when a 20-point individual ratio is in effect and the hold time (“delay”) to reveal a barrier is .25 sec (250 msec).

Figure 2. A screen shot of the TPT display for User2. In the upper left corner is a message indicating that User3 has requested that the other team members reveal their barriers. This example shows that User2 has accumulated 1 point. This version of the task requires each team member to accumulate 20 points to complete the component for all three members. The hold time to reveal a barrier is .25 sec (250 msec).

Figure 3 presents a screen shot of the display for User2 after a barrier was hit. A message appears in the upper left corner of the display. In this case, the barrier in row 2 (Barrier2) was hit.
The Target block is dimmed until the next capture and movement of a Resource block. The display also shows the decrement of 1 point from the Target score and from the scoreboard for User2.

Figure 3. A screen shot of the display for User2 after a barrier was hit. A message appears in the upper left corner of the display. In this case, the barrier in row 2 (Barrier2) was hit. The target is dimmed until the next movement of a Resource block. In comparison to Figure 2, the display shows the results of a 1-point decrement from the Target score and from the scoreboard.

Procedure

The study took place within a small, rectangular, windowless laboratory containing four tables, each table holding a PC, with two PCs positioned back-to-back along two walls. The subjects were seated a few feet apart such that they did not face each other. The task was presented to each subject using a Dell Optiplex 745 PC having a 17-inch screen. The first author supervised the study and remained in the laboratory with the three subjects and the research assistant. The subjects were told not to speak to each other during performance on the task, and the only direct communications permitted were through requests to reveal barriers, a feature of the task described above.

Each daily session consisted of six components. The components differed in terms of the hold time required to reveal a barrier (“barrier reveal delay”). The following six barrier reveal delays were presented within successive components in the following order for all eight sessions of the study: (1) .25 sec, (2) 1 sec, (3) 4 sec, (4) .25 sec, (5) 1 sec, and (6) 4 sec. These durations were chosen to match the range of delays evaluated previously (Emurian et al., in press). For the individual condition (I), each subject was required to accumulate 20 points to complete a component.
For the team condition (T), the subjects were required to accumulate 60 points to complete a component, irrespective of the relative contributions by the individual team members to that requirement. A counterbalanced order of the two conditions was in effect across the eight sessions, respectively: (1) I – T, (2) T – I, (3) I – T, (4) T – I, (5) I – T, (6) T – I, (7) I – T, and (8) T – I. During each condition, there were three components, with each component having one of the three barrier reveal delays as presented. For example, during Session 1, the individual ratio condition was in effect first. There were three components in that condition. The first component required each subject to accumulate 20 points, and the barrier reveal delay was .25 sec. The second component also required each subject to accumulate 20 points, and the barrier reveal delay was 1 sec. The third component required each subject to accumulate 20 points, and the barrier reveal delay was 4 sec. Under the team condition, the sequence of barrier reveal delays was identical to the individual condition sequence, but the team was required to accumulate 60 points to complete each component, irrespective of the contributions of the individual team members to that requirement. For the individual ratio components, a task completion message appeared on the display after all subjects had accumulated 20 points. It was not possible for a subject to accumulate more than 20 points during the individual ratio components, but all other features of the task continued to function. During the team ratio components, the completion message appeared after 60 points had been accumulated by the team. The study consisted of eight sessions, spaced a few days apart depending upon the schedules of the subjects. The first session was on 6/18/2010, and the eighth session was on 7/8/2010. Each session began sometime between 9 AM and 2 PM, and only one session was scheduled on a given day. 
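The counterbalanced session design and the two ratio-completion rules described above can be sketched as follows. This is a minimal illustration of the published design only; the function and variable names are ours, not part of the original task software.

```python
# Sketch of the session design: six components per session, each with a
# fixed barrier reveal delay, under an individual (I) or team (T) ratio
# condition counterbalanced across the eight sessions.

DELAYS = [0.25, 1, 4]       # barrier reveal delays (sec), run once per condition
INDIVIDUAL_RATIO = 20       # points required from each subject (condition I)
TEAM_RATIO = 60             # points required from the team as a whole (condition T)

def session_components(session_number):
    """Return the six (condition, delay) components for a 1-indexed session."""
    # Odd-numbered sessions ran I then T; even-numbered sessions ran T then I.
    order = ["I", "T"] if session_number % 2 == 1 else ["T", "I"]
    return [(cond, delay) for cond in order for delay in DELAYS]

def component_complete(condition, points_by_subject):
    """True when the ratio requirement in effect has been met."""
    if condition == "I":
        # Every subject must individually reach the 20-point ratio.
        return all(p >= INDIVIDUAL_RATIO for p in points_by_subject)
    # Team condition: only the pooled total matters.
    return sum(points_by_subject) >= TEAM_RATIO
```

For example, `session_components(1)` yields the Session 1 order of three individual-ratio components (.25, 1, 4 sec delays) followed by three team-ratio components with the same delay sequence, and `component_complete("T", [30, 20, 10])` is satisfied while `component_complete("I", [30, 20, 10])` is not, since one subject is below 20 points.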
Before the start of the first session, the task was explained to the subjects, and a practice session was administered with abbreviated parameters. Each subject was assigned a “user number” to be selected to start the task during each component across the conditions. As indicated above, the task automatically terminated when the ratio requirement in effect was completed. At the conclusion of each component within a session, the subjects completed the 6-item Perceived Cohesion Scale (PCS) (Salisbury, Carte, & Chidambaram, 2006), which yielded ratings of group Belonging and Morale, and the NASA Task Load Index (NASA-TLX; see Footnote 1), a measure of perceived workload (Cao, Chintamani, Pandya, & Ellis, 2009). Following the fourth session with the original team, S2 was replaced by a new member. S2 (Table 1) was replaced as a matter of convenience, because her schedule at the time did not allow further participation in the study. At the fifth session, the new member (S2*, Table 1) joined the team. She reported having no prior acquaintanceship with the other subjects. The subjects introduced themselves by name and major, and this was followed by the practice session to allow the new subject to become familiar with the task. Thereafter, four sessions took place that exactly replicated the first four sessions in the study with respect to the components and the conditions. However, prior to the beginning of Session 7, the team was instructed to spend time together to discuss the task and to consider ways to optimize their performance. The meeting lasted about 10 min. A similar meeting occurred prior to the beginning of Session 8. Other than these discussions in the laboratory, the subjects did not otherwise discuss the task, and they reported having no contact with each other between sessions.
Footnote 1: http://humansystems.arc.nasa.gov/groups/TLX/

Results

Individual data records will be presented, together with summaries of outcomes augmented by statistical inferences of orderliness. The analysis was undertaken with the Welch robust test for main effects, cellwise comparisons, and complex contrasts, with the presence of potential interaction effects being determined by graphical inspection of the outcomes. Where indicated, Dunnett’s T3 method was used for post-hoc pairwise comparisons, and p for other multiple comparisons was corrected with .05/a, where a = number of comparisons. Non-significant effects, including effects of subject, delay, condition, and part, are not reported and are not presented in the figures. For the analysis, the original team is designated as Part 1, and the reformed team is designated as Part 2. Because of the likely influence of participants on each other in this design, statistical tests were undertaken using between-subjects techniques, which are conservative in rejecting a null hypothesis in comparison to within-subjects techniques. Maxwell and Delaney (2004) was the reference source for the analysis, which was undertaken with SPSS. The figures are labeled with sec or msec for the barrier reveal delay components, hereafter referred to as “delay components” or “components,” depending upon available space on an axis.

Points

Figure 4 presents total points accumulated at the completion of each barrier reveal delay component during the team condition by each subject across the eight sessions. As indicated in the Procedure section, S2 was replaced at Session 5, and the reformed team held a brief meeting (10 min) to discuss their performance tactics before Session 7 and Session 8. The figure shows graphically that during Sessions 1 - 4, the three original team members did not contribute equally to the 60-point ratio in effect during the team condition.
In that regard, the discrepancies among the subjects were greater during Sessions 1 and 4 in comparison to Sessions 2 and 3. It is notable, perhaps, that S3 showed the lowest point accumulation in seven of the 12 components in Sessions 1 - 4.

Figure 4. Points accumulated by each subject at the completion of the team ratio condition across the three delay components of the eight sessions. S2 was replaced at Session 5.

When S2 was replaced at Session 5, the impact on performance is graphically apparent. During the 1-sec and 4-sec delay components, S2’s point accumulation differed markedly from the other two subjects, and S2 showed the lowest point accumulation of the study during the 4-sec delay component. During Session 6, although S2’s point accumulation was lower than that of the other subjects during the .25-sec and 1-sec delay components, S2 showed the highest point accumulation of the study during the 4-sec component. The impact of the tactical meeting is graphically evident in the point distributions during Session 7. Over the three delay components, the final component (i.e., the 4-sec barrier reveal delay) shows that each team member contributed 20 points to the 60-point team ratio requirement. That distribution persisted during Session 8, in which 20 points were contributed by all team members within each of the three components. Although performance on this particular metric stabilized over the last two sessions, ratings of team cohesion remained diminished, as presented below.

Durations of Session Components

Figure 5 presents the durations for each team member to complete the individual ratio across the three delay components for the eight sessions. Also presented are the durations for the team to complete the team ratio. Time to complete the ratios generally decreased across the sessions for both individual and team ratios, most notably in the 4-sec delay component.
The shortest time to complete an individual ratio (i.e., 47 sec) was evidenced by S1 on Session 7 in the .25-sec delay component, and the longest time was also evidenced by S1 in the 4-sec delay component on Session 2 (592 sec). With respect to the team ratio, Figure 5 shows graphically that the duration increased across the three delay components for six of the eight sessions. When S2 was replaced at Session 5, the gains acquired over the early sessions appeared to carry over when the team was reformed, at least in comparison to the first two sessions. The skill acquired by S1 and S3 may have compensated for the novitiate’s inexperience with the task.

Figure 5. Durations to complete individual ratios across the three delay components for the eight sessions. Also presented are the durations for the team to complete the team ratio. S2 was replaced at Session 5. Black circle = original team, gray circle = reformed team.

There was insufficient evidence to support differences among the subjects on the mean durations to complete